refactor: Refactor Parquet reader to avoid loading entire file in memory at once #184
Open
Eiko-Tokura wants to merge 2 commits into DataHaskell:main from
Conversation
Read Parquet metadata from the footer and fetch column chunk bytes by seeking, instead of loading the entire file into memory up front. This keeps the current page-decoding path intact while reducing peak memory usage for normal file reads: only the column chunks that are actually needed are loaded into memory, one column chunk at a time, so extra memory is bounded by the size of a single column chunk. This is also the first step towards a streaming reader.
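The footer-seek approach described above can be sketched roughly as follows. This is an illustrative reconstruction, not the PR's actual code; `readFooter` and its layout handling are assumptions based on the Parquet file format, which ends with `[file metadata][4-byte little-endian metadata length]["PAR1" magic]`:

```haskell
import System.IO
import qualified Data.ByteString as BS

-- Hypothetical sketch of a seek-based footer read (names are illustrative).
-- A Parquet file ends with: metadata, a 4-byte little-endian metadata
-- length, and the 4-byte magic "PAR1".
readFooter :: FilePath -> IO BS.ByteString
readFooter path = withFile path ReadMode $ \h -> do
  size <- hFileSize h
  hSeek h AbsoluteSeek (size - 8)          -- last 8 bytes: length + magic
  tail8 <- BS.hGet h 8
  let (lenBytes, magic) = BS.splitAt 4 tail8
      -- decode the 4-byte little-endian metadata length
      metaLen = foldr (\b acc -> acc * 256 + fromIntegral b) 0
                      (BS.unpack lenBytes) :: Integer
  if magic /= BS.pack [0x50, 0x41, 0x52, 0x31]   -- "PAR1"
    then ioError (userError "not a Parquet file: missing PAR1 magic")
    else do
      hSeek h AbsoluteSeek (size - 8 - metaLen)  -- seek to metadata start
      BS.hGet h (fromIntegral metaLen)
```

The same seek-then-read pattern extends to column chunks: the decoded metadata carries each chunk's file offset and size, so the reader can seek to a chunk and read only those bytes.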
mchav
requested changes
Mar 15, 2026
    stm >= 2.5 && < 3,
    filepath >= 1.4 && < 2,
    Glob >= 0.10 && < 1,
    streamly-core,
Member
Cabal check fails without version bounds. @adithyaov please advise on bounds.
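For illustration, a bound in the style of the neighbouring dependencies might look like the line below; the actual version range is an assumption and should be confirmed against the streamly-core releases actually supported:

```cabal
streamly-core >= 0.2 && < 0.3,
```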
    { selectedColumns = Nothing
    , predicate = Nothing
    , rowRange = Nothing
    , forceNonSeekable = Nothing
Member
Rather than putting this in the public API we'd much rather make a separate testing endpoint.
Something like:
```haskell
-- production path
readParquetWithOpts opts path = withSeekable path ReadMode (readHelper opts path)

-- This would be the testing function.
_readParquetWithOpts opts path = withFilebuffer path ReadMode (readHelper opts path)
```
I actually don't know if there is a good way to inject behaviour into Haskell tests in this way. Let me ask around and get back to this, but separating the functions seems like a good first step to me.
Read Parquet metadata from the footer and fetch column chunk bytes by seeking, instead of loading the entire file into memory up front.
This keeps the current page-decoding path intact while reducing peak memory usage for normal file reads: only the column chunks that are actually needed are loaded into memory, one chunk at a time, so extra memory is bounded by the size of a single column chunk.
(For the "extra memory <= 1 chunk" bound to hold completely, it also depends on the page-decoding behaviour, i.e. whether any unevaluated thunks still reference a chunk's ByteString. I've switched Parquet.hs to use Map.Strict; once we move to a fully streamed reader this should no longer be a problem.)
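The thunk concern can be illustrated with a small sketch (names are illustrative, not the PR's code). With a lazy map, the stored value would remain an unevaluated thunk that keeps the whole chunk alive; `Data.Map.Strict` forces it on insert, and `BS.copy` gives the slice its own buffer:

```haskell
import qualified Data.ByteString as BS
import qualified Data.Map.Strict as Map

-- With Data.Map.Lazy, `BS.copy slice` would be stored as a thunk that
-- retains all of `chunk`; Data.Map.Strict evaluates the value on insert,
-- and BS.copy gives the slice its own buffer so the chunk can be GC'd.
storeColumn :: Int -> BS.ByteString -> Map.Map Int BS.ByteString
            -> Map.Map Int BS.ByteString
storeColumn key chunk m =
  let slice = BS.copy (BS.take 16 chunk)   -- copy: drop reference to chunk
  in Map.insert key slice m
</imports>
```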
This is also the first step towards a streaming reader.
Compatibility with non-seekable sources is also maintained.
Tests added and passing.
Ready for feedback.
Related issue: #133
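The non-seekable compatibility mentioned above could be dispatched along these lines; this is a sketch with assumed names (`readSource` and both continuation arguments are hypothetical), using GHC's `hIsSeekable` to pick a path:

```haskell
import System.IO
import qualified Data.ByteString as BS

-- Hypothetical dispatch (names illustrative): seekable handles take the
-- footer-seek path with bounded memory; non-seekable sources (e.g. pipes)
-- fall back to buffering all bytes, matching the old behaviour.
readSource :: Handle -> (Handle -> IO a) -> (BS.ByteString -> IO a) -> IO a
readSource h seekPath bufferPath = do
  seekable <- hIsSeekable h
  if seekable
    then seekPath h                         -- seek-based, bounded memory
    else BS.hGetContents h >>= bufferPath   -- buffer everything up front
```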